46 research outputs found

    Proposition d'un modèle relationnel d'indexation syntagmatique : mise en oeuvre dans le système iota

    No full text
    National audience. We present a model supporting phrase-based indexing. This model includes a formal description of index terms, a derivation process, a matching function, a semantics for the indexing language, and a function weighting the correspondence between index terms. It highlights the elements that should guide the design of Information Retrieval Systems based on compound words. We also propose a set of techniques for implementing this model, particularly for the automatic extraction of phrases and for their weighting when computing the relevance of a document to a query.
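As a rough illustration of the phrase-weighting idea described in this abstract, the sketch below scores documents by tf-idf-weighted overlap of word bigrams with a query. The bigram extraction and the scoring formula are simplifying assumptions for demonstration, not the model defined in the paper.

```python
# Illustrative sketch of phrase-based indexing with weighted matching.
# The phrase definition (adjacent word pairs) and the tf-idf weighting
# are assumptions for demonstration, not the paper's actual model.
import math
from collections import Counter

def extract_phrases(text):
    """Naive phrase extraction: adjacent lowercase word pairs (bigrams)."""
    words = text.lower().split()
    return [" ".join(pair) for pair in zip(words, words[1:])]

def score(query, documents):
    """Score each document by tf-idf-weighted phrase overlap with the query."""
    doc_phrases = [Counter(extract_phrases(d)) for d in documents]
    n = len(documents)
    df = Counter()  # document frequency of each phrase
    for counts in doc_phrases:
        df.update(set(counts))
    query_phrases = set(extract_phrases(query))
    scores = []
    for counts in doc_phrases:
        s = sum(counts[p] * math.log(n / df[p])
                for p in query_phrases if p in counts)
        scores.append(s)
    return scores
```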

    Getting a Clean Shot on a Blurred Target: Improving Targeting for Strategic Scanning through Action Research in 10 French Organizations

    Get PDF
    Targeting comprises defining the part of the business environment that corresponds to an organization's strategic objectives and priorities. Targeting is not an easy process because it involves interaction among managers from different organizational units who may have a fragmentary and blurred understanding of the overall issue. Through action research, we designed and evaluated a GSS to help managers target strategic scanning in fuzzy contexts. Evaluations through interventions in 10 French organizations both allowed participants to achieve relevant targets and allowed us as researchers to propose four major improvements to targeting activities: 1) use suggested lists of actors and topics as starting points to trigger and facilitate discussions; 2) define actor and topic importance to produce useful targeting results; 3) evaluate the organization's perceived capacity to be informed early enough; and 4) define a mechanism to signal scanning relevancy in the short, mid-, or long term. From a management perspective, our results help managers in their strategic scanning activity by 1) identifying information needs for strategically scanning fuzzy subjects, 2) reducing the risk of strategic scanning failure, 3) enabling organizations to assess their scanning capabilities, 4) identifying scanning priorities according to a temporal horizon, and 5) fostering teamwork participation.

    QUARITE (quality of care, risk management and technology in obstetrics): a cluster-randomized trial of a multifaceted intervention to improve emergency obstetric care in Senegal and Mali

    Get PDF
    Background: Maternal and perinatal mortality are major problems for which progress in sub-Saharan Africa has been inadequate, even though childbirth services are available, even in the poorest countries. Reducing them is the aim of two of the main Millennium Development Goals. Many initiatives have been undertaken to remedy this situation, such as the Advances in Labour and Risk Management (ALARM) International Program, whose purpose is to improve the quality of obstetric services in low-income countries. However, few interventions have been evaluated, in this context, using rigorous methods for analyzing effectiveness in terms of health outcomes. The objective of this trial is to evaluate the effectiveness of the ALARM International Program (AIP) in reducing maternal mortality in referral hospitals in Senegal and Mali. Secondary goals include evaluation of the relationships between effectiveness and resource availability, service organization, medical practices, and satisfaction among health personnel.
    Methods/Design: This is an international, multi-centre, controlled cluster-randomized trial of a complex intervention. The intervention is based on the concept of evidence-based practice and on a combination of two approaches aimed at improving the performance of health personnel: 1) educational outreach visits; and 2) the implementation of facility-based maternal death reviews. The unit of intervention is the public health facility equipped with a functional operating room. On the basis of consent provided by hospital authorities, 46 centres out of 49 eligible were selected in Mali and Senegal. Using randomization stratified by country and by level of care, 23 centres will be allocated to the intervention group and 23 to the control group. The intervention will last two years. It will be preceded by a pre-intervention one-year period for baseline data collection. A continuous clinical data collection system has been set up in all participating centres. This, along with the inventory of resources and the satisfaction surveys administered to the health personnel, will allow us to measure results before, during, and after the intervention. The overall rate of maternal mortality measured in hospitals during the post-intervention period (Year 4) is the primary outcome. The evaluation will also include cost-effectiveness.
    Trial Registration: The QUARITE trial is registered on the Current Controlled Trials website under the number ISRCTN46950658 (http://www.controlled-trials.com/).
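The allocation procedure described above (randomization stratified by country and by level of care, with centres split evenly between arms) can be sketched as follows. The centre identifiers and strata in the example are made up for illustration; the trial's actual randomization protocol may differ in its details.

```python
# Sketch of stratified cluster randomization: within each
# (country, level-of-care) stratum, centres are shuffled and split
# evenly between the intervention and control arms. Centre identifiers
# here are hypothetical, for illustration only.
import random

def stratified_allocation(centres, seed=0):
    """centres: list of (centre_id, country, level) tuples.
    Returns (intervention, control) lists of centre ids."""
    rng = random.Random(seed)
    strata = {}
    for cid, country, level in centres:
        strata.setdefault((country, level), []).append(cid)
    intervention, control = [], []
    for group in strata.values():
        rng.shuffle(group)
        half = len(group) // 2
        intervention.extend(group[:half])
        control.extend(group[half:])
    return intervention, control
```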

    The evolving SARS-CoV-2 epidemic in Africa: Insights from rapidly expanding genomic surveillance

    Get PDF
    Introduction: Investment in Africa over the past year with regard to severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) sequencing has led to a massive increase in the number of sequences, which, to date, exceeds 100,000 sequences generated to track the pandemic on the continent. These sequences have profoundly affected how public health officials in Africa have navigated the COVID-19 pandemic.
    Rationale: We demonstrate how the first 100,000 SARS-CoV-2 sequences from Africa have helped monitor the epidemic on the continent, how genomic surveillance expanded over the course of the pandemic, and how we adapted our sequencing methods to deal with an evolving virus. Finally, we also examine how viral lineages have spread across the continent in a phylogeographic framework to gain insights into the underlying temporal and spatial transmission dynamics for several variants of concern (VOCs).
    Results: Our results indicate that the number of countries in Africa that can sequence the virus within their own borders is growing and that this is coupled with a shorter turnaround time from the time of sampling to sequence submission. Ongoing evolution necessitated the continual updating of primer sets, and, as a result, eight primer sets were designed in tandem with viral evolution and used to ensure effective sequencing of the virus. The pandemic unfolded through multiple waves of infection that were each driven by distinct genetic lineages, with B.1-like ancestral strains associated with the first pandemic wave of infections in 2020. Successive waves on the continent were fueled by different VOCs, with Alpha and Beta cocirculating in distinct spatial patterns during the second wave and Delta and Omicron affecting the whole continent during the third and fourth waves, respectively. Phylogeographic reconstruction points toward distinct differences in viral importation and exportation patterns associated with the Alpha, Beta, Delta, and Omicron variants and subvariants, when considering both Africa versus the rest of the world and viral dissemination within the continent. Our epidemiological and phylogenetic inferences therefore underscore the heterogeneous nature of the pandemic on the continent and highlight key insights and challenges, for instance, recognizing the limitations of low testing proportions. We also highlight the early warning capacity that genomic surveillance in Africa has had for the rest of the world with the detection of new lineages and variants, the most recent being the characterization of various Omicron subvariants.
    Conclusion: Sustained investment for diagnostics and genomic surveillance in Africa is needed as the virus continues to evolve. This is important not only to help combat SARS-CoV-2 on the continent but also because it can be used as a platform to help address the many emerging and reemerging infectious disease threats in Africa. In particular, capacity building for local sequencing within countries or within the continent should be prioritized because this is generally associated with shorter turnaround times, providing the most benefit to local public health authorities tasked with pandemic response and mitigation and allowing for the fastest reaction to localized outbreaks. These investments are crucial for pandemic preparedness and response and will serve the health of the continent well into the 21st century.

    Combining Text Mining and NLP for Information Retrieval

    No full text
    International audience. No abstract.

    Extraction et impact des connaissances sur les performances des systèmes de recherche d'information

    No full text
    An information retrieval system must find the best possible results in an information-rich context. Our study focuses on the knowledge that can be extracted from the textual content of documents by combining the fine-grained analysis of a linguistic approach (extraction and structuring) with the capacity of a statistical approach to process large corpora. The statistical approach is based on text data mining, specifically the association-rule technique. The linguistic approach is based on noun phrases, which we consider better suited to representing document content than single words. It specifies the linguistic constraints needed for the extraction of noun phrases and makes explicit the syntagmatic relations between the components of a noun phrase. These syntagmatic relations are exploited to structure the noun phrases. A measure called "information quantity" is proposed to estimate the evocative power of each noun phrase and to filter and compare noun phrases. The proposed model demonstrates that combining a statistical approach with a linguistic approach refines the extracted knowledge and improves the performance of an information retrieval system.
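The association-rule component of the statistical approach described above can be sketched minimally as follows. The documents are modeled as sets of index terms, and the support/confidence thresholds and toy terms are illustrative assumptions, not values from the thesis.

```python
# Minimal association-rule extraction between index terms, sketching the
# text-mining component described above. The thresholds and the toy
# "documents" (sets of noun phrases) are illustrative assumptions.
from itertools import combinations

def association_rules(docs, min_support=0.5, min_confidence=0.6):
    """docs: list of sets of terms.
    Returns (antecedent, consequent, support, confidence) rules over term pairs."""
    n = len(docs)
    terms = set().union(*docs)
    rules = []
    for a, b in combinations(sorted(terms), 2):
        for x, y in ((a, b), (b, a)):  # test both rule directions
            n_x = sum(1 for d in docs if x in d)
            n_xy = sum(1 for d in docs if x in d and y in d)
            if n_x == 0:
                continue
            support = n_xy / n          # fraction of docs with both terms
            confidence = n_xy / n_x     # P(y in doc | x in doc)
            if support >= min_support and confidence >= min_confidence:
                rules.append((x, y, support, confidence))
    return rules
```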

    Utilisation de Fouille de Données pour l'Indexation Automatique des Images

    No full text
    International audience. No abstract.